
Solving Most Systems of Random Quadratic Equations

Gang Wang, Georgios Giannakis, Yousef Saad, Jie Chen

Neural Information Processing Systems

We put forth a novel procedure that starts with a weighted maximal correlation initialization, obtainable with a few power iterations, followed by successive refinements based on iteratively reweighted gradient-type iterations. The novel techniques distinguish themselves from prior works by the inclusion of a fresh (re)weighting regularization.
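The two-stage pipeline described above (a weighted spectral-style initialization computed with power iterations, then reweighted gradient refinements) can be sketched in NumPy for real-valued measurements y_i = (a_i^T x)^2. The specific weighting rule below is an illustrative placeholder, not the paper's exact (re)weighting regularization:

```python
import numpy as np

def solve_quadratic_system(A, y, iters=2000, step=0.2, beta=5.0, seed=0):
    """Sketch: weighted correlation initialization via power iterations,
    then reweighted gradient refinement for y_i ~ (a_i^T x)^2.
    The weighting rule is illustrative, not the paper's exact scheme."""
    m, n = A.shape
    # Initialization: leading eigenvector of the y-weighted correlation
    # matrix (1/m) * sum_i y_i a_i a_i^T, found by power iterations.
    Y = (A * y[:, None]).T @ A / m
    z = np.random.default_rng(seed).standard_normal(n)
    for _ in range(50):
        z = Y @ z
        z /= np.linalg.norm(z)
    z *= np.sqrt(np.mean(y))  # scale to match the measurement energy
    # Refinement: gradient steps on the intensity loss, with per-equation
    # weights that damp equations whose residuals are unusually large.
    for _ in range(iters):
        Az = A @ z
        res = Az**2 - y
        scale = np.abs(res).mean() + 1e-12
        w = 1.0 / (1.0 + beta * np.abs(res) / scale)
        grad = (A.T @ (w * res * Az)) / m
        z -= step * grad
    return z
```

Since the measurements determine x only up to a global sign, any error metric should compare the estimate against both x and -x.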


Solving Random Systems of Quadratic Equations via Truncated Generalized Gradient Flow

Gang Wang, Georgios Giannakis

Neural Information Processing Systems

The notation φ(n) = O(g(n)) means that there is a constant c > 0 such that |φ(n)| ≤ c|g(n)|. Even with the "plain vanilla" spectral initialization, performance still suffers when the number of measurements is limited. Stages s1) and s2) are delineated next in reverse order.



Diffusion-DFL: Decision-focused Diffusion Models for Stochastic Optimization

Zhao, Zihao, Yeh, Christopher, Kong, Lingkai, Wang, Kai

arXiv.org Machine Learning

Decision-focused learning (DFL) integrates predictive modeling and optimization by training predictors to optimize the downstream decision target rather than merely minimizing prediction error. To date, existing DFL methods typically rely on deterministic point predictions, which are often insufficient to capture the intrinsic stochasticity of real-world environments. To address this challenge, we propose the first diffusion-based DFL approach, which trains a diffusion model to represent the distribution of uncertain parameters and optimizes the decision by solving a stochastic optimization problem with samples drawn from the diffusion model. Our contributions are twofold. First, we formulate diffusion DFL using the reparameterization trick, enabling end-to-end training through diffusion. While effective, it is memory- and compute-intensive due to the need to differentiate through the diffusion sampling process. Second, we propose a lightweight score function estimator that uses only a few forward diffusion passes and avoids backpropagation through the sampling. This builds on our result that backpropagation through the stochastic optimization can be approximated by a weighted score-function formulation. We empirically show that our diffusion DFL approach consistently outperforms strong baselines in decision quality. The source code for all experiments is available at the project repository: https://github.com/GT-KOALA/Diffusion_DFL.
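The contrast between the two gradient estimators mentioned above (differentiating through sampling via reparameterization versus a score-function estimator that needs no backpropagation through the sampler) can be illustrated with a toy stand-in: a one-dimensional Gaussian N(mu, 1) in place of the diffusion model, and a simple quadratic cost f in place of the downstream decision objective. All names here are made up for illustration:

```python
import numpy as np

def f(xi):
    # Stand-in for the downstream decision cost evaluated at a sample.
    return (xi - 2.0) ** 2

def grad_reparam(mu, n, rng):
    # Reparameterization: write xi = mu + eps with eps ~ N(0,1) and
    # differentiate the cost through the sample with respect to mu.
    eps = rng.standard_normal(n)
    return np.mean(2.0 * (mu + eps - 2.0))  # d f(mu + eps) / d mu

def grad_score(mu, n, rng):
    # Score function: E[f(xi) * d log p_mu(xi) / d mu]; only forward
    # samples of f are needed, no backprop through the sampler.
    xi = rng.standard_normal(n) + mu
    score = xi - mu                 # d/dmu of log N(xi; mu, 1)
    fx = f(xi)
    # Subtracting the mean cost is a standard baseline for variance reduction.
    return np.mean((fx - fx.mean()) * score)

rng = np.random.default_rng(0)
g_reparam = grad_reparam(0.0, 200_000, rng)
g_score = grad_score(0.0, 200_000, rng)
# Closed form: J(mu) = (mu - 2)^2 + 1, so dJ/dmu at mu = 0 is -4.
```

Both estimators target the same gradient; the score-function version trades some variance for the ability to skip backpropagation through sampling, which is the trade-off the abstract's second contribution exploits.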



Probabilistic low-rank matrix completion on finite alphabets

Jean Lafond, Olga Klopp, Eric Moulines, Joseph Salmon

Neural Information Processing Systems

The task of reconstructing a matrix given a sample of observed entries is known as the matrix completion problem. It arises in a wide range of problems, including recommender systems, collaborative filtering, dimensionality reduction, image processing, quantum physics, and multi-class classification, to name a few. Most works have focused on recovering an unknown real-valued low-rank matrix from randomly sub-sampled entries. Here, we investigate the case where the observations take a finite number of values, corresponding for example to ratings in recommender systems or labels in multi-class classification. We also consider a general sampling scheme (not necessarily uniform) over the matrix entries. The performance of a nuclear-norm penalized estimator is analyzed theoretically. More precisely, we derive bounds for the Kullback-Leibler divergence between the true and estimated distributions. In practice, we also propose an efficient algorithm based on lifted coordinate gradient descent in order to tackle potentially high-dimensional settings.